A recent Beyond Identity survey examined how convincing the generative artificial intelligence (AI) tool ChatGPT can be at tricking individuals. Respondents were asked to review different schemes and say whether they would be susceptible, and if not, to identify the factors that aroused suspicion. Thirty-nine percent said they would fall victim to at least one of the phishing messages, 49% would have been tricked into downloading a fake ChatGPT app, and 13% reported using AI to generate passwords.
As part of the survey, ChatGPT drafted phishing emails, texts and social media posts, and respondents were asked to identify which were believable. Among the 39% who said they would fall victim to at least one of the options, the social media post scam (21%) and the text message scam (15%) fooled the most people. For those wary of all the messages, the top giveaways were suspicious links, strange requests and demands for unusual amounts of money.
Although 93% of respondents had never had their information stolen through an unsafe app in real life, 49% were fooled when asked to pick the real ChatGPT app from a lineup of six options that included copycats. According to the report, those who had fallen victim to app fraud in the past were much more likely to fall for it again.
According to the survey, ChatGPT can use easy-to-find personal information to generate lists of probable passwords for breaching accounts. That is a problem for the one in four respondents who use personal information in their passwords, such as birth dates (35%) or pet names (34%), details that can be readily found on social media, business profiles and phone listings.
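The report does not detail how such password lists are built, but the underlying technique is simple to sketch. Here is a minimal, hypothetical Python example (the personal details and function names are illustrative, not from the survey) showing how a few public facts can fan out into many plausible guesses:

```python
import itertools

def candidate_passwords(details, years=range(1980, 2010)):
    """Generate likely password guesses from publicly visible details.

    `details` is a list of strings an attacker might scrape from a
    public profile: pet names, hometowns, birth years. This is a toy
    illustration of why such details make weak password material.
    """
    # Common capitalization variants for each detail.
    variants = set()
    for d in details:
        variants.update({d.lower(), d.capitalize(), d.upper()})

    # Typical suffixes people bolt onto a familiar word.
    suffixes = ["", "!", "123"] + [str(y) for y in years]

    guesses = set()
    for base, suffix in itertools.product(variants, suffixes):
        guesses.add(base + suffix)

    # Simple two-detail combinations, e.g. pet name + birth year.
    for a, b in itertools.permutations(variants, 2):
        guesses.add(a + b)
    return guesses

if __name__ == "__main__":
    # Hypothetical details found on a public social media profile.
    found = ["rex", "springfield", "1987"]
    guesses = candidate_passwords(found, years=[1987])
    print(len(guesses), "candidate passwords, for example:")
    for g in sorted(guesses)[:10]:
        print(" ", g)
```

Even this toy generator turns three public details into dozens of candidates, and real attack tooling applies far more mutation rules, which is why passwords built from personal information are easy targets.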